Stomach cancer image segmentation method based on EfficientNetV2 and object-contextual representation
Di ZHOU, Zili ZHANG, Jia CHEN, Xinrong HU, Ruhan HE, Jun ZHANG
Journal of Computer Applications, 2023, 43 (9): 2955-2962. DOI: 10.11772/j.issn.1001-9081.2022081159

To address the problems that the upsampling process of U-Net easily loses details and that stomach cancer pathological image datasets are generally small, which tends to cause over-fitting, an automatic segmentation model for stomach cancer pathological images based on an improved U-Net, named EOU-Net, was proposed. In EOU-Net, EfficientNetV2 was used as the backbone of the U-Net encoder to enhance its feature extraction ability. In the decoding stage, the relations between cell pixels were explored on the basis of Object-Contextual Representation (OCR), and an improved OCR module was used to recover the details lost during upsampling. Then, Test Time Augmentation (TTA) post-processing was applied: predictions were made on versions of the input image obtained by flipping and rotating it at different angles, and these predictions were combined by feature fusion to further refine the network output, thereby effectively mitigating the problem of small medical datasets. Experimental results on the SEED, BOT and PASCAL VOC 2012 datasets show that the Mean Intersection over Union (MIoU) of EOU-Net is 1.8, 0.6 and 4.5 percentage points higher than that of OCRNet, respectively. It can be seen that EOU-Net obtains more accurate segmentation results for stomach cancer images.
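
The TTA step can be illustrated with a minimal PyTorch sketch. It assumes a segmentation model that maps a (B, C, H, W) image tensor to per-pixel class logits; the function name and the averaging-based fusion are illustrative assumptions, not details taken from the paper.

```python
import torch

def tta_predict(model, image):
    """Sketch of Test Time Augmentation: predict on flipped and 90-degree
    rotated copies of the input, undo each transform on the output, and
    fuse the predictions by averaging (one simple fusion choice).
    `model` is assumed to map (B, C, H, W) images to (B, num_classes, H, W) logits.
    """
    preds = []
    with torch.no_grad():
        preds.append(model(image))  # identity
        # horizontal and vertical flips, inverted after prediction
        preds.append(torch.flip(model(torch.flip(image, dims=[3])), dims=[3]))
        preds.append(torch.flip(model(torch.flip(image, dims=[2])), dims=[2]))
        # rotations by 90, 180 and 270 degrees, inverted after prediction
        for k in (1, 2, 3):
            rotated = torch.rot90(image, k=k, dims=[2, 3])
            preds.append(torch.rot90(model(rotated), k=-k, dims=[2, 3]))
    fused = torch.stack(preds).mean(dim=0)        # fuse augmented predictions
    return fused.argmax(dim=1)                    # per-pixel segmentation mask
```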

Pedestrian trajectory prediction based on multi-head soft attention graph convolutional network
Tao PENG, Yalong KANG, Feng YU, Zili ZHANG, Junping LIU, Xinrong HU, Ruhan HE, Li LI
Journal of Computer Applications, 2023, 43 (3): 736-743. DOI: 10.11772/j.issn.1001-9081.2022020207

The complexity of pedestrian interaction is a challenge for pedestrian trajectory prediction, and existing algorithms struggle to capture meaningful interaction information between pedestrians and cannot model these interactions intuitively. To address this problem, a multi-head soft attention graph convolutional network was proposed. Firstly, Multi-head Soft ATTention (MS ATT) combined with an involution network was used to extract a sparse spatial adjacency matrix and a sparse temporal adjacency matrix from the spatial and temporal graph inputs respectively, generating a sparse spatial directed graph and a sparse temporal directed graph. Then, a Graph Convolutional Network (GCN) was used to learn interaction and motion trend features from these sparse directed graphs. Finally, the learned trajectory features were fed into a Temporal Convolutional Network (TCN) to predict the parameters of a bivariate Gaussian distribution, from which the predicted pedestrian trajectories were generated. Experiments on the Eidgenossische Technische Hochschule (ETH) and University of CYprus (UCY) datasets show that, compared with the Space-time sOcial relationship pooling pedestrian trajectory Prediction Model (SOPM), the proposed algorithm reduces the Average Displacement Error (ADE) by 2.78%, and compared with the Sparse Graph Convolution Network (SGCN), it reduces the Final Displacement Error (FDE) by 16.92%.
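
The general idea of deriving a soft adjacency matrix from pedestrian features with attention and then aggregating neighbour information with a graph convolution can be sketched as below. This is an illustrative PyTorch sketch under assumed shapes and layer sizes; the class name and the scaled dot-product formulation are assumptions, not the paper's exact MS ATT or involution module.

```python
import torch
import torch.nn as nn

class SoftAttentionGraphConv(nn.Module):
    """Illustrative module: build a soft adjacency matrix between pedestrians
    with scaled dot-product attention, then aggregate neighbour features with
    a graph convolution. Input shape (N, D): N pedestrians, D features per frame.
    """
    def __init__(self, dim, out_dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.weight = nn.Linear(dim, out_dim)

    def forward(self, x):
        q, k = self.query(x), self.key(x)                       # (N, D)
        scores = q @ k.transpose(0, 1) / (x.shape[-1] ** 0.5)   # (N, N) pairwise scores
        adjacency = torch.softmax(scores, dim=-1)               # soft adjacency matrix
        return torch.relu(self.weight(adjacency @ x))           # aggregate and project

# usage: 5 pedestrians with 16-dimensional per-frame embeddings
layer = SoftAttentionGraphConv(dim=16, out_dim=32)
out = layer(torch.randn(5, 16))   # shape (5, 32)
```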
